Reinventing the wheel? Modelling temporal
uncertainty with applications to brooch
distributions in Roman Britain
What follows is the pre-copy-edited text of a paper that was published in its final form as
Baxter, M.J. and Cool, H.E.M. 2016. ‘Reinventing the wheel? Modelling temporal uncertainty with
applications to brooch distributions in Roman Britain’, Journal of Archaeological Science 66, 120-27.
The version here includes the supplementary material originally published online. The published paper
is available at: http://www.sciencedirect.com/science/article/pii/S0305440315003234
There are two other papers which provide additional information.
The parent paper with all the data is Cool, H.E.M. and Baxter, M.J. 2016. ‘Brooches and Britannia’,
Britannia 47, 71-98. doi: 10.1017/S0068113X16000039
Cool, H.E.M. and Baxter, M.J. 2016. ‘Exploring morphological bias in metal-detected finds’, Antiquity
90 (issue 354), 1643-53.
This paper explores one aspect of bias in the data which the research uncovered.
[Cover figure: kernel density plot of brooch date distributions, density against date (−50 to 450), for the South and North regions.]
Reinventing the wheel? Modelling temporal uncertainty
with applications to brooch distributions in Roman Britain
M. J. Baxter^a,1, H. E. M. Cool^b
a Nottingham Trent University (Emeritus), 16 Lady Bay Road, West Bridgford, Nottingham NG2 5BJ, UK.
b Barbican Research Associates, 16 Lady Bay Road, West Bridgford, Nottingham NG2 5BJ, UK.
Abstract
A simple, simulation-based model of temporal uncertainty is presented that embraces
other approaches recently proposed in the literature, including those more usually involving mathematical calculation rather than simulation. More specifically, it is shown how
the random generation of dates for events, conditioned by uncertain temporal knowledge
of the true date, can be adapted to what has been called the chronological apportioning
of artefact assemblages and aoristic analysis (as a temporal rather than spatio-temporal
method). The methodology is in the same spirit – though there are differences – as
that underpinning the use of summed radiocarbon dates. A possibly novel approach to
representing temporal change is suggested. Ideas are illustrated using data extracted
from a large corpus of late Iron Age and Roman brooches, where the focus of interest
was on their temporal and regional distribution over a period of about 450 years.
KEYWORDS: aoristic, brooches, Roman, simulation, modelling, temporal change, temporal uncertainty
1 Introduction
In a review paper, Modelling Temporal Uncertainty in Archaeological Analysis, Crema
(2012: 441) ‘tackles the thorny but surprisingly neglected problem of both the quantification of temporal uncertainties and their integration into archaeological analysis’ (our
emphasis). The present paper examines ways in which this problem might be addressed.
It is shown that several recent proposals for modelling temporal uncertainty can be understood within the framework of a simple model of temporal variation. Some detailed
illustrative examples are provided, primarily for a corpus of 10,921 brooches from late
Iron Age and Roman Britain.
Temporal variation is probabilistically modelled when radiocarbon dates are re-expressed in terms of a probability distribution over a range of calendar dates. Applications are widespread; Crema (2012: 441) largely excludes applications of this kind,
where secure absolute dating evidence is available, from his review, and we follow suit.
Crema does not intend to imply that archaeologists neglect time; rather that ‘we have
often neglected the role of time in our quantitative methods, failing to integrate formally the temporal dimension as part of other analyses’ (our emphasis), and that ‘our
assessment of time is often restricted to introductory and concluding narratives’.
1 Communicating author
Our interest in this problem stems from an investigation of regional differences in
brooch use patterns in late Iron Age and Roman Britain (Cool and Baxter, 2016). Our
realisation that the modelling approach developed there could be used to reinterpret
other recently proposed methods prompted the present paper. The background to that
study is presented in Section 2. Section 3 expands on the methodology, forming the
basis for the illustrative applications of Section 4. Other methodologies that have been
proposed recently are explicitly related to our approach in Section 5. Section 6 concludes
the paper.
The appendix includes supplementary material that provides additional mathematical and computational background. Version 3.1.2 of R (R Core Team, 2015) was used
for all analyses.
2 Archaeological background and data
Table 1, taken from Cool and Baxter (2016), is based on information extracted from
Mackreth’s (2011) corpus Brooches in Late Iron Age and Roman Britain.
A brooch can be assigned to a type and the region in which it was found. Many of
the brooches were not recovered from dated contexts, but the type to which they belong
can be assigned a terminus post quem (TPQ) based on the earliest date for brooches
of the type found from dated contexts. Several types may be associated with the same
TPQ and have been grouped together in Table 1. That is, the entries refer to counts of
brooches from a region whose type has a particular TPQ with type differences for that
TPQ being ignored. The period labels are conventional, named, for the most part, after
the more memorable emperors or their families who dominated the period.
Cool and Baxter (2016) aimed to compare the regional and temporal distributions
of brooch use/loss – as evidenced by the archaeological record. Several problems were
immediately encountered because of temporal uncertainty associated with the date of
brooch loss, the pattern of brooch loss, and what will be called the life-span of a type.
The life-span is determined by the terminus ante quem (TAQ), often not known with
any precision. Some types were probably short-lived (10–15 years); for other types a lifespan of over 100 years is feasible. Some archaeologists (of whom Mackreth (2011) was
one) work with what might be called a prior belief that brooch type life-spans are short,
so that any late occurrence of a type is treated as de facto evidence of residuality. This
means that competing archaeological interpretations of the ‘objective’ archaeological
record contribute an extra layer of temporal uncertainty to any analysis.
The pattern of loss is impossible to quantify with certainty. In cognate applications
(Section 5), the assumption of a uniform distribution is common. This can be a reasonable assumption – essentially acknowledging total ignorance of the pattern (Crema,
2012). For artefact loss it seems qualitatively inappropriate; for example, the ‘popularity’ of a brooch type is more likely to rise to a peak and then decline. How this
qualitative idea can be modelled is discussed below.
Table 1: Regional counts of late Iron Age and Roman brooches by TPQ and period. The
regions are EA = East Anglia, N = North, S = South, EM = East Midlands, WM = West
Midlands, SW = South-West. The periods are IA = Iron Age, AT = Augusto-Tiberian, CN =
Claudio-Neronian, FL = Flavian, TH = Trajanic-Hadrianic, AS = Antonine-Severan, L = Late.
TPQ    EA     N     S    EM    WM    SW   Period
-70    25     0    88    20     3    24   IA
-50    25     1   107    29     3    26   IA
-30    13     2    49    11     1    25   IA
-20     0     1     7     1     0    20   IA
 -5     6     0    24     3     0     1   IA
  1    62    11   141    64    30    95   AT
  5     2     0    19     5     0    82   AT
 10   173    14   532   157    37   119   AT
 15     9     0    26     7     3     3   AT
 20    38     4   277    51    15    81   AT
 25     3     0    13     0     0     1   AT
 30    32     0    56    20     1    14   AT
 35   247    26   561   176    50   319   AT
 40    31     2    72    41     5     8   CN
 45    82     8   203    54    20   339   CN
 50   190     8   106    77    28   142   CN
 55   297    24   182   119    34    98   CN
 60    68     4    68    22    19    48   CN
 65    55    10    74    45    64   146   CN
 70    29    11    27    22    10    17   FL
 75    89    58   106    90    53   140   FL
 80   130    70    57    79    35    94   FL
 85   161    89    73   142    63   145   FL
 95    70    73    26    18    10    22   FL
100     4     5     5     5    13     4   TH
105    10     1    12     2     1    14   TH
110    52    37    58    37    40    49   TH
115    12    25    18    11    13    32   TH
120    10     6     7    17     9    17   TH
125    36    10    38    15    11    53   TH
135     8     7     6     2     3     6   TH
140    21    17    19     7     7     5   TH
145    23    15    11     4     0    15   AS
150    34    50    37    18    11    39   AS
155    10     2     8     7     1     6   AS
160    29    21    15    10     6    14   AS
170    13    18    31     5     2    14   AS
175    22     3    12    10     0    13   AS
180     7     7    15     6     4     6   AS
185    16     3    12     0     0     6   AS
190    68    16    40    16    11    18   AS
200    44    25    28    14     3    16   AS
210    15    19     9     3     0     5   AS
215    19    33    33     8     1    11   AS
275    23     2     3     2     2    11   L
290     3     7     7     2     1     0   L
310     5     3     6     1     1     2   L
340     3     3     3     1     3     1   L
350     8     5    12     2     5     6   L
370     5     0     3     0     2     5   L
400     4     0     7     2     2     2   L
3 Methods – modelling temporal uncertainty

3.1 Sampling models
Define an event of interest, ei – brooch loss in the previous section. There are n events; ei
occurs between a TPQ, τ1i , and TAQ, τ2i ; call the difference between them the life-span
of the event, Li . The date of occurrence of ei is unknown.
Events can be characterised by a set of attributes, for example (Type, TPQ, TAQ,
Region) for the brooch data. It is convenient to introduce the concept of event size. If
a set of events is identical in all respects relevant to an analysis they can be grouped
together and ‘Size’ may be treated as an event attribute. In Table 1 TPQ is a surrogate
for type (ignoring any type differences where they have the same TPQ) so type becomes
a redundant attribute. The entries in Table 1 are the sizes of events from the same
region having the same TPQ. This is not restrictive since the date of loss of a brooch
can be treated as a unique event with a size of 1 if appropriate.
An assumption in what follows is that events are independent; this would be violated if, for example, in the context of the brooch data, a subset of brooches that
constituted an event came from the same site and had a known stratigraphic relationship. Very few of the brooches in the sample belong to such subsets so the development
below assumes independence. More generally, a reviewer has suggested that where this
is an issue knowledge of stratigraphic information can be integrated into the simulation
methodology used.
With this in place the idea is to simulate a set of n random dates, di, using the model

di = τ1i + Li × pi        (1)
where pi is sampled from a probability distribution, Φi (x), lying between 0 and 1.
Assuming, temporarily, that τ2i and hence Li are known, practical implementation
devolves to choosing Φi (x) to generate dates. A flexible model is the Beta distribution
(see the Appendix). For immediate purposes it is sufficient to know that it is defined
over the range [0, 1] and depends on two shape parameters, α and β. If α = β = 1 a
uniform distribution is obtained. For α and β > 1 a unimodal distribution with a mode
at M where

M = (α − 1) / (α + β − 2)
is obtained. This is a symmetric distribution with M = 0.5 if α = β = 2. The
concentration about the mode increases as the parameter values increase. This can be
thought of as expressing the belief that brooch loss peaks half-way through the life-span
of a type.
If we believe that brooch loss peaks before its half-life then β > α > 1 generates
M < 0.5. If α = 2, M = 1/β; for example, β = 4 models the idea that peak loss occurs
a quarter of the way through the life-span of a type. The Beta distribution can be used
to model specific beliefs about the nature of the temporal distribution of ei and is used
in this paper unless otherwise stated. Some illustrations are provided in the appendix.
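To make the sampling protocol concrete, the following sketch implements model (1) with a Beta choice for Φi. It is written in Python rather than the R used for the paper's analyses, and the event attributes below are invented for illustration, not taken from Table 1.

```python
import numpy as np

def simulate_dates(tpq, lifespan, size, alpha=2.0, beta=2.0, rng=None):
    """Model (1): d_i = TPQ_i + L_i * p_i with p_i ~ Beta(alpha, beta).

    `size` is the event size, i.e. the number of artefacts sharing each
    (TPQ, life-span) combination; one date is drawn per artefact.
    alpha = beta = 1 recovers the uniform distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    tpq = np.repeat(np.asarray(tpq, float), size)   # one entry per artefact
    L = np.repeat(np.asarray(lifespan, float), size)
    return tpq + L * rng.beta(alpha, beta, size=len(tpq))

# Three illustrative events with a common 100-year life-span
rng = np.random.default_rng(0)
dates = simulate_dates(tpq=[40, 75, 150], lifespan=[100, 100, 100],
                       size=[5, 3, 2], alpha=2, beta=2, rng=rng)
```

Setting α = 2 and β = 4 in the same function expresses the belief that peak loss occurs a quarter of the way through the life-span.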
The protocol described generates a sample of n dates, hence the term ‘sampling
model’, which need to be summarised in some way. Graphical possibilities include moving averages, non-parametric regression curves or, as we choose here unless otherwise
stated, kernel density estimates (KDEs).
3.2 ‘Population’ models
An alternative perspective, here termed the ‘population’ model is to re-write model (1)
as
fi(d) = τ1i + Li Φi(x)        (2)

where fi(d) is the probability distribution for the date of an event. These can be thought of
as kernels that are summed and rescaled to get a KDE. This idea underpins applications
using summed radiocarbon calibrated dates (Section 5.1). It can be thought of as a
‘population’ model in the sense that replicating the data set K times for large K will
generate a set of dates that can be treated as a ‘population’; applying model (1) to these
approximates the results obtained from model (2) (Sections 4 and 5 illustrate).
Thus the ‘population’ model formulation can be subsumed within the sampling model
approach. For tractable definitions of Φi results from model (2) can be obtained mathematically without recourse to simulation. The relative merits of the two approaches are
discussed in Section 5.2.
3.3 Practicalities – model implementation
Fairly typically, τ1i and τ2i are assumed known or subject to uncertainty that is small
in relation to the life-span of an event. In applications to the brooch data of Table 1
the TAQs of a type are mostly rather uncertain, denying the luxury of treating Li as
known. It thus needs to be modelled or assumed. Cool and Baxter (2016) initially
assumed that all types had the same life-span (Li = L), and the effect of varying
L was explored. This can be turned to advantage as it explicitly involves a multiscalar approach that acknowledges that perception of pattern varies with the temporal
(or spatial, or spatio-temporal) resolution used (e.g., Bevan and Connolly, 2006, 2009;
Crema, 2012; Palmisano, 2013; Markovsky, 2014).
The use of KDEs (Baxter et al., 1997; Baxter, 2003: 29–37) also involves an element
of ‘multi-scalarity’ since the picture obtained is dependent on the choice of bandwidth
(equivalent to the choice of bin-width in a histogram). Smaller bandwidths reveal more
detail and larger ones emphasise the broader patterns; taken to extremes an uninterpretable and/or uninformative picture emerges.
The KDE obtained will clearly depend on the specific sample generated. The obvious
way of dealing with this is to generate a set of simulated dates and overlay these, providing a good idea of the sampling variability involved and, possibly, more interpretable
pictures than a single KDE provides. The general idea has been emphasised in some
recent literature (e.g., Bevan, Crema et al., 2013; Crema, 2015) and is illustrated in
Section 4.
Any picture, and its interpretation, will depend on the probability model assumed.
An obvious approach, yet again, is to explore the consequences of using different models.
What is important is whether or not substantive archaeological interpretation changes
according to the assumptions involved. In a kind of Bayesian spirit this can be thought
of as a form of sensitivity analysis as prior beliefs about the life and ‘life-style’ of an
entity are explored.
3.4 Aspects of uncertainty
‘Uncertainty’ manifests itself in various ways, and both reviewers have asked that we
clarify how the models described above might handle different kinds of uncertainty.
Specifically, and adopting the terminology of Cool and Baxter (2016), there is uncertainty
surrounding (a) the life-span of a single artefact, (b) the life-span of an artefact type (i.e.
Li ), (c) the assignation of a type to a particular time period, and (d) the distribution of
use over a time period.
The models described above require that Li be specified and this can be done either
purposively or randomly as Figure 1 illustrates. The data used for the figure, from
Table 1, involves events with a size n ≥ 1 but, in principle, the model can be used if
n = 1 for all events, covering case (a) above, if one has sufficient information. How practical
this is will depend very much on context; for our application it is only necessary to assign
an artefact to a type (based on Mackreth’s (2011) comprehensive study) and types to a
period defined by the TPQ, about which we are reasonably confident. That is, it is not
necessary to know too much else about an individual artefact, though should detailed
knowledge exist it can be exploited.
The assumptions concerning Li and use-patterns over time, (d) above, generate a random sample of dates of brooch loss from which inferences about temporal and regional
variation can be explored, repeated simulation and variation of assumptions allowing the
robustness of inferences to be assessed. We have mostly used the Beta distribution to
model patterns of brooch loss. That, over its life-span, an artefact type increases then
declines in popularity is an idea that many – if not most – specialists would be comfortable with and, for example, this kind of ‘model’ underpins many seriation methods
in common use. If an event is such that one is genuinely ignorant about its temporal
distribution, within defined limits, or unwilling to make assumptions, then the uniform
distribution, which is a special case of the Beta distribution, is a natural choice.
To elaborate a little, Cool and Baxter (2016) explored other approaches to dealing
with the uncertainty associated with the TPQ, τ1i , and the TAQ, τ2i . The TPQs are
known with reasonable confidence, but there is still some uncertainty attendant on them.
This was investigated in Cool and Baxter by simulating values for τ1i over the range
(τ1i − c, τ1i ) using a uniform distribution for a suitably chosen constant c (which can
be varied). The TAQ can be similarly treated. This might be thought of as akin to
the use of hyperpriors in Bayesian models, where the parameters of a prior distribution
are themselves modelled by a prior distribution. The main approach used in Cool and
Baxter (2016) was to vary the assumptions about the TAQ to see what effect this has
on conclusions. Using values up to, and beyond, what most specialists would consider
realistic, substantive conclusions about temporal and regional variation are surprisingly
robust.
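The uniform ‘jitter’ on the TPQ described above can be sketched in the same style (Python; the constant c and the nominal TPQs below are illustrative):

```python
import numpy as np

def jitter_tpq(tpq, c, rng=None):
    """Replace each nominal TPQ by a draw from Uniform(TPQ - c, TPQ),
    expressing uncertainty about how early a type really begins."""
    rng = np.random.default_rng() if rng is None else rng
    tpq = np.asarray(tpq, float)
    return rng.uniform(tpq - c, tpq)

rng = np.random.default_rng(1)
nominal = np.array([40.0, 75.0, 150.0])
jittered = jitter_tpq(nominal, c=10, rng=rng)  # each at most 10 years early
```

The simulated TPQs then feed into model (1) in place of the nominal values; the TAQ can be treated in the same way.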
Some other approaches to dealing with uncertainty, explored in the literature, may
be noted briefly. Bevan, Connolly et al. (2013) deal with a problem where assignation of
artefacts to periods is rather less certain than in our application. Taking conventionally
and broadly defined periods as ‘given’, they use the subjective judgments of several specialists to assign artefacts to periods with different degrees of ‘confidence’ (summing to
100%). Towards the other end of what might be thought of as a ‘scale of uncertainty’,
where a parametric model is proposed for (in our example) life-span there is the possibility of estimating, rather than assuming, the parameters if a sufficient number of ‘exact’
dates are available. This isn’t a practical option for our data but the idea is exploited
in Manning et al. (2014: 1073, 1078) who model the ‘broad-scale “temporal shape” of
Neolithic cultures’ using a Gaussian model for summed probability distributions based
on radiocarbon dates (Section 5.1) and suggest other unimodal distributions might also
be used.
4 Applications

4.1 Brooches – the sampling model
For illustration, the data for the South (3349 brooches) and North (756) from Table 1
will be used. Initially assume that Li = L = 100 and α = β = 2. Applying model (1)
with 99 simulations results in Figure 1a.
[Figure 1: two-panel density plot, density against date (−50 to 450), for the South and North regions. (a) Assumed life-span, 100 years; Beta model α = 2, β = 2. (b) Variable life-span, 10–100 years; Beta model α = 2, β = 2–5.]
Figure 1: Density plots of brooch use over time, for the South and North regions (Table 1), from
99 simulations. The left-hand plot is modelled using a life-span
of 100 years and a Beta distribution with α = 2 and β = 2; that to the right uses randomly
assigned life-spans between 10–100 years, and Beta distributions with α = 2 and β randomly
sampled between 2 and 5.
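The construction behind plots of this kind can be sketched as follows; this is a Python stand-in for the authors' R workflow, with invented event counts for one region and a simple fixed-bandwidth Gaussian KDE in place of whatever kernel and bandwidth were actually used.

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Fixed-bandwidth Gaussian kernel density estimate evaluated on `grid`."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def region_curves(tpq, size, n_sims=99, L=100, alpha=2.0, beta=2.0, seed=0):
    """One KDE curve per simulation: draw dates via model (1), then smooth.
    Overlaying the n_sims curves displays the sampling variability."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(-50, 450, 251)
    tpq_rep = np.repeat(np.asarray(tpq, float), size)
    curves = [gaussian_kde(tpq_rep + L * rng.beta(alpha, beta, size=len(tpq_rep)),
                           grid, bandwidth=20.0)
              for _ in range(n_sims)]
    return grid, np.array(curves)

# Toy stand-in for one regional column of Table 1 (not the real counts)
grid, curves = region_curves(tpq=[1, 35, 75, 150], size=[50, 80, 40, 10])
```

Each of the 99 curves integrates to approximately one; plotting them together on one set of axes, one region per colour, gives a display of the kind shown in Figure 1.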
The comparison is between relative densities of use within regions over time. The
message – archaeologically not a surprise – is that significant brooch use starts and
declines earlier in the South than the North. The pattern of rise and fall for the South
is mirrored, with time shifts, in most other regions (Cool and Baxter, 2016); the North
departs from this in that the decline is arrested in the first half of the third century
AD. The greater uncertainty associated with the pattern for the North, manifest in the
thicker banded structure, is attributable to the smaller sample size.
For the most part we lack the knowledge to quantify the life-span of types with any
certainty, though they undoubtedly vary. Similarly, quantifying the pattern of brooch
loss with any certainty is not possible, except that we incline to the belief that peak
brooch loss is likely to occur before the half-life of a type. To explore the consequences
of this the life-span and peak-brooch loss at each TPQ were randomly varied. For illustration, Figure 1b shows the result of sampling Li randomly from a uniform distribution
bounded by 10 and 100 years, fixing α = 2, and randomly sampling β from a uniform
distribution between 2 and 5.
The inter-regional differences are still apparent but Figure 1b shows more sampling
variability than Figure 1a. This becomes problematic when other regions, lying in an
intermediate position, are plotted, since patterns in the data are obscured by overlapping
(Cool and Baxter, 2016). One way of avoiding this is to plot ‘simulation envelopes’ that
strip out some percentage of the more extreme simulations. This is not needed for
Figure 1 as the pattern remains clear, but is illustrated in Cool and Baxter (2016)
following methodology described in Bevan et al. (2013).
Figure 1b differs from Figure 1a in the pattern of intra-regional variation for the
North where two modes are now apparent, a little after AD 150 and in the first quarter
of the third century AD. This arises because these are two periods where new types
were introduced, with a noticeably greater presence than in the years either side of them
(Table 1). The modelling assumptions are such that relatively few dates are generated for
the TPQs concerned that overlap between the two periods and, with not much happening
in between, the modes emerge. With assumptions allowing for greater overlap the modes
coalesce, as in Figure 1a.
This is illustrative of the multi-scalar approach, and invites interpretation which
is interesting from both a statistical and archaeological perspective. From the latter
point of view one possible explanation for the pattern in Figure 1b (which we think is
unsustainable – Cool and Baxter, 2016) is that it reflects patterns of military occupation
in the North in the latter half of the Roman era. Another possibility is that the hiatus in
brooch use suggested in Figure 1b, and thus the modelling assumptions that generated it,
is unrealistic. Other possibilities are discussed in Cool and Baxter (2016), among them
the possibility that observed patterns may be partially affected by modern recovery
practices such as metal-detection. The modelling approach both draws attention to
phenomena that merit explanation and makes explicit the assumptions that can give
rise to different perceptions of pattern.
Statistically, as the assumptions increasingly restrict the dates at which brooch loss
can occur, by ‘pushing’ simulated dates closer to the TPQ of a type, the density plot becomes ‘bumpier’ and less interpretable. This makes regional comparisons more difficult
and implies, unrealistically, that all types ‘expire on birth’. Smoother, more interpretable
plots are obtained by allowing types a reasonable life. In adopting a multi-scalar approach, realistic bounds are needed for the scales of temporal resolution explored; what
is realistic may not be obvious!
4.2 Brooches – the ‘population’ model
Thus far the sampling model has been used. The population model is now employed.
Suppose it is of interest to investigate brooch use over time within the different periods of
Table 1 and, more specifically, estimate the proportional distribution of brooch use/loss
across periods for regions.
Figure 2 illustrates the general scheme. Three events are shown, the life-spans of
which overlap each other and two or more of the periods defined. The problem is to
determine the proportion of the distribution of an event that occurs within each of the
periods that it overlaps – what Crema (2012: 446) calls the ‘probability of existence’.
If a uniform distribution is assumed it is straightforward to calculate the proportions
mathematically. As already discussed, a similar result can be obtained via the sampling
model if K copies of the event are generated, for large enough K. If K dates are simulated
for an event, the counts falling within each period are obtained and converted to
probabilities on division by K. What constitutes a ‘large’ value of K can be investigated
in context by experimentation.
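As a sketch (Python, using the illustrative event and period boundaries of Figure 2), the simulation route to these probabilities is:

```python
import numpy as np

def apportion(tpq, taq, period_edges, K=100_000, alpha=1.0, beta=1.0, seed=0):
    """Estimate the probability that an event dated (TPQ, TAQ) falls within
    each period by simulating K dates and counting; alpha = beta = 1 gives
    the uniform-distribution case, where the answer is also exact on paper."""
    rng = np.random.default_rng(seed)
    dates = tpq + (taq - tpq) * rng.beta(alpha, beta, size=K)
    counts, _ = np.histogram(dates, bins=period_edges)
    return counts / K

# An event spanning AD 30-110 against periods A: 0-40, B: 40-80, C: 80-120;
# under the uniform assumption the exact proportions are 1/8, 1/2 and 3/8
probs = apportion(30, 110, period_edges=[0, 40, 80, 120])
```

Multiplying these probabilities by the event size apportions an assemblage across periods, as in Section 5.2.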
[Schematic figure: three events with overlapping life-spans (event 1: AD 60–140, event 2: AD 30–110, event 3: AD 10–70) plotted against Periods A, B and C on a date (AD) axis running from 0 to 140.]
Figure 2: A schematic representation of the problem of apportioning the probability distribution
of events across periods.
Table 2: The estimated ‘population’ distribution (%) of brooch loss across periods by region
based on 100 replicates of Table 1. The assumed life-span is 100 years with α = β = 2 in the
Beta distribution model.
              Period
Region   IA   AT   CN   FL   TH   AS    L
SW        2    5   14   25   33   15    5
S         4    9   20   27   24    9    6
EA        1    4   11   22   33   17   11
EM        2    6   14   23   33   16    5
WM        1    4    9   19   37   24    6
N         0    1    3    8   31   36   21
Table 2 shows the results of applying model (1) to Table 1 with numbers multiplied
by 100 (i.e. 100 replicates), at which point results stabilise. Archaeologically, much of what can
be inferred from Table 2 can be inferred from plots similar to those in Figure 1, with
the addition of the other regions (Cool and Baxter, 2016). Table 2, however, provides a
concise numerical summary.
5 Related methodologies

5.1 Summed radiocarbon calibrated dates
A radiocarbon date for an event, call it τi , is converted via calibration into a distribution
of calendar dates, fi (d; τi ). Given a large enough number of dates relating to a long span
of time these distributions may be summed and scaled to obtain what is, in effect, a KDE
which, it is hoped in many applications, can be interpreted, as Contreras and Meadows
(2014: 591) put it, as a ‘valid prox[y] for human populations’. Use of this methodology
is widespread (Williams, 2012). It is identical in spirit to the ‘population’ model (2),
though differing in the nature of the prior information and the way the date distribution
is obtained, directly, via the calibration procedure.
The main intention in this paper is to note the connection to the models proposed
in Section 3, but it may be observed that there is a current, sometimes quite vigorous,
debate taking place about the merits of the methodology. Much of the critical literature
on the subject (e.g., Williams, 2012; Contreras and Meadows, 2014; Manning et al.,
2014; Woodbridge et al., 2014; Torfing, 2015a,b; Timpson et al., 2015) is concerned with
evaluating whether or not summed radiocarbon dates are indeed reasonable proxies
for the temporal distribution of past human populations. The potential effects of the
calibration procedure are not relevant here. Taphonomic issues – the nature of the
survival and recovery of entities and the extent to which they are representative of past
‘target’ populations (in the statistical sense) – are important. Discussion is beyond the
scope of the present paper but see Surovell and Brantingham (2007) and Surovell et al.
(2009). Also worth noting, though also not for detailed discussion, is Grove (2011) who
allies the combination of radiocarbon dates – explicitly via KDE methods – with spatial
modelling as an approach to reconstructing land-use distributions through time.
Of more immediate interest, and the subject of discussion in some of these papers,
is the issue of sample size. The observed τi are a sample; even if there are no other
issues the question of what sample size is needed to obtain a good (human) population
proxy arises. Williams (2012) suggests, on the basis of sub-sampling from a large number
of dates, that a sample of at least 500 is needed; Contreras and Meadows (2014) take
issue, suggesting that, depending on the time scale involved and the changing density
of τi over the range, even samples of 2000 may not be adequate. Shennan et al. (2013)
and particularly Timpson et al. (2014), in turn, take issue with the approach used by
Contreras and Meadows and develop a methodology, heavily dependent on statistical
hypothesis testing and simulation ideas, intended to show that valid inferences can be
drawn from much smaller samples than suggested in the other papers cited above. The
rather polarized debate subsequently generated by this claim, in the papers by Torfing
(2015a,b) and Timpson et al. (2015), can be read more as arguments about whether the
pre-conditions necessary for the methodology to be useful apply than the technicalities
of the methodology itself.
The lesson to be drawn from this is probably that it is dangerous to be prescriptive.
Section 5.2 explores how such questions might be addressed using the sampling model
of Section 3.1.
5.2 Chronological apportioning of artefact assemblages
Roberts et al. (2012) describe a method for the chronological apportioning of ceramic
assemblages. Their concern is to make temporal comparisons between long-inhabited
sites characterized by ceramic wares from multiple time periods. Their illustrative example (their Table 3) focuses on nine events (ceramic types) with sizes ranging from 4 to
1605, five of which are 13 or less. The site is dated from AD 1200 to AD 1350, divided
into sub-periods of 50 years duration; the TPQs, TAQs and life-spans are highly variable
and, in general, only partially overlap the sub-periods of interest. The task Roberts et
al. set themselves is to apportion the size of an event across the sub-periods. This is
exactly the problem represented in Figure 2 where probabilities are calculated for each
type within sub-periods and multiplied by the size of the event.
Roberts et al. (2012: 1519–1520) accomplish this mathematically via a population
model, assuming either a uniform or truncated normal distribution Φi . Results, in their
Table 2, can be reproduced very closely using the sampling model, as in Section 4.2,
with an admittedly large number, K, of copies of the data set. Details are not shown
here; for brevity we examine how estimates of the assemblage total vary with K and Φi ,
Li being known.
Table 3: Estimated assemblage totals by period for the data of Roberts et al. (2012), as
the number of replications K and the date distribution model vary. The periods A–C are AD
1200–1250, AD 1250–1300 and AD 1300–1350. Results in the table to the right are based on
K = 100,000.

                       A      B      C
Uniform              684    772    830
Uniform (K = 10)    1667    435    184
Uniform (K = 10²)    668   1022    616
Uniform (K = 10³)    698    561   1026
Uniform (K = 10⁴)    727    763    795
Uniform (K = 10⁵)    680    770    836
Uniform (K = 10⁶)    685    773    828

                       A      B      C
Uniform              684    772    830
TN(2)                855    708    723
Beta(2,2)            848    719    719
Beta(2,3)           1050    608    538
Beta(2,4)           1207    650    429
TN(2.5)              954    670    662
TN(3)               1063    629    594
The top line in the table to the left of Table 3 shows the estimated sub-period totals
determined mathematically; the rest of the table shows the variation in estimates using
the sampling model formulation as K varies. A rather large value of K > 100,000 is
needed before results stabilise (results are obtained almost instantaneously, however).
The right-hand table shows the estimates obtained, for K = 100,000, as Φi varies. The
top two lines are the mathematically determined estimates for the uniform distribution
and, following Roberts et al., a Normal distribution symmetrically truncated at ±2
(T N (a) denotes a Normal distribution symmetrically truncated at ±a). The message is
that results are very dependent on Φi ; Roberts et al. apply what they call a ‘demographic
adjustment’ to allow for estimated variations in the (human) population, but this does
not affect the overall conclusion. Arguably for artefact studies at least, and in general,
none of the ‘prior beliefs’ embodied in the assumed Φi are inherently unrealistic.
Why use the sampling model (with replication) when the mathematics is tractable?
In a sense this is a moot point; both give rise to the same results. The sampling model
methodology is arguably an inelegant ‘brute force’ approach, but then, as these things
go, the mathematical approach is hardly endowed with aesthetic appeal. The sampling
model can be readily applied, is fast, flexible and – perhaps most importantly – allows
the rapid comparison of the results from a wide variety of modelling assumptions, not all
necessarily mathematically tractable. If the view is taken in an application that events
should be equated with dates rather than date distributions the sampling model, with
repeated simulations, allows a ready assessment of sampling variation.
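The mechanics of the sampling model with replication can be sketched in a few lines; the paper's own code is in R, so the Python below is purely illustrative, and the data, period boundaries and function name are invented rather than taken from Roberts et al.

```python
import random

def apportion(events, periods, K=200, seed=0):
    """Estimate assemblage totals per period by simulation: each item's
    date is drawn uniformly on [TPQ, TPQ + lifespan); replicating the
    data K times and dividing counts by K approximates the 'population'
    model, with larger K giving more stable estimates."""
    rng = random.Random(seed)
    totals = [0] * len(periods)
    for tpq, life, size in events:
        for _ in range(size * K):
            d = tpq + life * rng.random()
            for j, (start, end) in enumerate(periods):
                if start <= d < end:
                    totals[j] += 1
                    break
    return [t / K for t in totals]

# Hypothetical assemblage: (TPQ, lifespan, count) triples.
events = [(1200, 100, 50), (1230, 100, 30), (1260, 100, 20)]
periods = [(1200, 1250), (1250, 1300), (1300, 1350)]
print(apportion(events, periods, K=200))  # roughly [31, 48, 19]
```

Increasing K makes the estimates converge on the mathematically determined apportionment, mirroring the behaviour seen in Table 3.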
5.3 Aoristic analysis
In a key paper, in the field of criminology, Ratcliffe (2000) defines aoristic analysis as
‘the spatial interpretation of unspecific temporal events’. Johnson (2004) is credited with
introducing the ideas to archaeology; archaeological applications have been pursued by
Enrico Crema and colleagues (Crema, 2011, 2012; Crema et al., 2010, 2013). The focus
has mainly been on spatio-temporal analysis; here the concern is solely with temporal
analysis following, with some reinterpretation and selective emphases, the account of
Crema (2012).
Events are assumed to have a non-zero probability of existence over some period
of time. Usually date distributions are assumed to be uniform and embedded within
a much longer time-span, typically defined so that it can be divided, arbitrarily, into
time-blocks of equal length. Crema notes that time blocks of equal length and the
uniform distribution assumption are not mandatory, but the former is mathematically
convenient and the latter assumption judged to be appropriate in his application. Event
probabilities are distributed among the time-blocks in which they occur, and probabilities
for events within time blocks are summed to get what are termed aoristic weights.
The framework is identical to that illustrated in Figure 2, specialising to the uniform
distribution and equal length time-blocks.
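Under these assumptions the aoristic weights can be computed directly: each event's uniform probability mass is split among the time-blocks in proportion to overlap, and contributions are summed. A minimal Python sketch (the function name and toy data are ours, not from the paper):

```python
def aoristic_weights(events, start, stop, width):
    """Aoristic weights for equal-width time-blocks assuming a uniform
    date distribution: an event of given size contributes
    size * overlap / lifespan to every block its life-span intersects,
    and contributions are summed over events."""
    n_blocks = (stop - start) // width
    w = [0.0] * n_blocks
    for tpq, life, size in events:
        for b in range(n_blocks):
            lo, hi = start + b * width, start + (b + 1) * width
            overlap = max(0, min(hi, tpq + life) - max(lo, tpq))
            w[b] += size * overlap / life
    return w

# Toy type: 10 items uniform on [-20, 80), 20-year blocks from -80 to 500;
# the five blocks covering [-20, 80) each receive weight 2.0.
w = aoristic_weights([(-20, 100, 10)], -80, 500, 20)
```

The weights sum to the total event size, and a simulation with large K (as in the sampling model) approximates the same quantities.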
The aoristic weights can be used in various ways, most simply by plotting them in
some way against the temporal order of the time-blocks. Figure 3 uses the 756 brooches
from the North, in Table 1, for illustration. A uniform distribution across a 100 year
life-span for a type is assumed with the period 80 BC to AD 500 divided arbitrarily
into time-blocks of 20 years. ‘Population’ aoristic weights were approximated via the
sampling model with K = 200 replicates of the data. A little experimentation suggests
K = 100 would be satisfactory so 200 is ‘on the safe side’. Thus 151,200 dates are
generated, allocated to time-blocks, counted, and divided by 200 to get aoristic weights.
Although the aoristic weights are summed probabilities and not frequencies there
seems no reason not to use a histogram-style presentation, as illustrated in Figure 3.
Numbers are rescaled so this can be thought of as a density distribution of aoristic
weights. This is overlaid in the figure with 100 simulated KDEs obtained from the
sampling model (without replicating the data). This shows that, as far as an overall
representation of temporal change goes, a continuous rather than ‘blocky’ portrayal is
equally valid. Variation in the density estimates can be removed, if one wishes, by
replicating the data set a sufficiently large number of times.
The main difference between the KDEs in Figure 3 and those for the North in Figure 1a arises from the different Φi used. They are, nevertheless, similar. It suggests that
the aoristic weights as modelled here are essentially a ‘binned’ presentation of dates,
modelled continuously if KDEs are used.
Figure 3: Normalised aoristic weights for the North data of Table 1, represented in the form of
a histogram with superimposed KDEs for 100 sets of simulated dates.
Crema’s (2012) article has a focus on rates of change at the transition periods between
time-blocks. Echoing a concern voiced in Crema et al. (2010), Crema (2012: 448) states
that ‘the main problem with this approach is that information related to uncertainty is
embedded in the results and as such it provides a broad “likelihood” of the intensity of
the process for single time-blocks but is unable to provide formal probabilistic description
when two or more time-blocks are directly compared’. The quite subtle argument, and
illustrative example, provided by Crema is not rehearsed here; suffice it to note that a
mathematical resolution of the problem identified is proposed, but it turns out to be
computationally infeasible.
To get round the computational problem Crema (2012: 450–451) resorts, without
using our formalism, to the sampling model (1), generating simulated sets of n dates
and basing subsequent analysis on these. For his specific problem Crema (2012: 452)
groups the dates for each simulation into time-blocks and defines a measure of change at
transitions between blocks as the difference in counts between a block and that preceding
it divided by the (common) time span of the blocks (see, also, Kolář et al., 2015).
Simulated values at transitions are summarised in a multiple boxplot. Figure 4a emulates
Crema’s Figure 5 using the brooch data for the North; the example is illustrative so only
100 simulations are used.
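The rate-of-change measure just described can be emulated in a few lines of Python (again, the data and names are illustrative, not those of the paper): for each simulated set of dates, block counts are formed and differenced across each transition.

```python
import random

def transition_rates(events, start, stop, width, nsim=100, seed=1):
    """Simulate nsim sets of dates (one uniform draw per item); for each
    simulation, count dates per time-block and record the rate of change
    at every transition as (count[b] - count[b-1]) / width.  The nsim
    values per transition can then be summarised, e.g. in boxplots."""
    rng = random.Random(seed)
    n_blocks = (stop - start) // width
    rates = [[] for _ in range(n_blocks - 1)]
    for _ in range(nsim):
        counts = [0] * n_blocks
        for tpq, life, size in events:
            for _ in range(size):
                b = int((tpq + life * rng.random() - start) // width)
                if 0 <= b < n_blocks:
                    counts[b] += 1
        for b in range(1, n_blocks):
            rates[b - 1].append((counts[b] - counts[b - 1]) / width)
    return rates

# Illustrative data: three overlapping 'types' with different TPQs.
rates = transition_rates([(0, 100, 30), (40, 100, 50), (120, 100, 20)],
                         0, 240, 20)
```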
Figure 3 suggested that the ‘blocky’ presentational aspects of temporal aoristic analysis might be abandoned in favour of a continuous portrayal. This is equally true of the
representation of change illustrated in Figure 4a. An alternative measure of change to
that used for Figure 4a can be based on the first derivatives of the KDEs that can be
Figure 4: The left-hand figure is of multiple boxplots of estimated rates of change (as described
in the text) between time blocks of 20 years, for the North data of Table 1, based on 100
simulations. The right-hand figure is based on KDEs of the simulated data, the rate of change
being determined by the first derivatives of the KDEs.
generated from the simulated data (readily done in R) and plotted as in Figure 4b. It
is apparent that the two figures tell the same story. Arguably figures such as Figure 4b
are to be preferred, since they do not confine investigation of change to the arbitrary
transition points used in analyses like Figure 4a. The idea exploited here is, in context,
possibly new.
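The derivative-based measure can be sketched directly: for a Gaussian KDE the first derivative has a closed form, so no numerical differencing is needed. A Python illustration, with invented dates and bandwidth:

```python
import math

def kde_deriv(dates, x, bw=10.0):
    """First derivative of a Gaussian KDE at x.  Positive values mean
    the estimated intensity is rising at x, negative that it is falling;
    unlike block-differencing, x can be any date, not just a transition."""
    s = 0.0
    for d in dates:
        u = (x - d) / bw
        s += -u * math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return s / (len(dates) * bw * bw)

# Invented dates clustered around 100: the estimated density rises
# before the cluster and falls after it.
dates = [90.0, 95.0, 100.0, 105.0, 110.0]
```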
One final point, not illustrated for reasons of space, is that multi-scalar analysis
should not be neglected. If assumptions are adopted that ‘push’ simulated dates closer
to their TPQs, the result is predictable. More and sharper modes and anti-modes
appear in the density estimates and their derivatives. As mentioned elsewhere above,
what constitutes sensible scales at which to investigate temporal variation is context-dependent and a matter for archaeological judgment.
6 Discussion and conclusion
The phrase ‘reinventing the wheel?’ is in this paper’s title for a good reason. When we
researched alternatives to the model used in Cool and Baxter (2016) it became evident
that several other approaches motivated by similar concerns had indeed been proposed.
It would not have been a surprise, therefore, to discover that we had reinvented previous
methodology.
In the event we have yet to locate an explicit archaeological formulation of what we
term the sampling model (1), though it certainly underpins some studies. Some of the
relevant studies we are aware of are – with varying degrees of explicitness – in terms of
what is here called the population model (2), implemented mathematically (e.g. Willet,
2014). In these circumstances thinking in terms of the sampling model isn’t necessary.
Nevertheless, as has been illustrated, results from the population model can be replicated
via the sampling model by the simple device of data replication, so subsequent discussion
is based on model (1). It’s not suggested that the latter should be used in preference to
an appropriate population model; only that it is possible to do so, if wished.
The sampling model framework provides a simple, unifying, conceptual approach for
modelling temporal uncertainty that embraces a number of – at first sight – disparate
methods. A particular attraction is the practicality of the methodology which can be
easily and rapidly applied to explore the consequences of varying model assumptions,
not all of which need be as mathematically tractable as the normal and uniform models
are.
Adopting the uniform distribution amounts to expressing a belief – total ignorance
– about the potential date of an event within its assumed life-span. The assumption
may be unrealistic in some applications, such as the artefact studies from which we started,
where the expectation is more usually that the use of types rises to a peak of popularity
and then declines. This idea can be readily incorporated in analysis within a formal
modelling framework.
The brooch study used for illustration here may act as a paradigm. The lifespan of
a type is not known with any certainty, nor is the use pattern except for beliefs related
to popularity peaks. Different scholars can hold strong and conflicting views about
these matters. Where their beliefs are both coherently articulated and rational
they can be modelled and confronted with data, should an adequate body exist, to
explore the consequences. Model (1) lends itself to the exploration of the consequences
of different archaeological assumptions. Any modelling exercise should be undertaken
with an archaeological aim or aims in mind, and discriminating between the relative
merits of different views when confronted with good quality data may be among these
aims. Some instances of this kind of use were noted in Section 4 and further examples
using the same data can be found in Cool and Baxter (2016).
Formal modelling of temporal uncertainty has, as others have noted before us, considerable potential. It allows, if you like, narrative skeletons – well-articulated or not
– to be clothed in quantitative flesh. The body may change in appearance as the temporal resolution and other assumptions related to temporal uncertainty are varied. If
the bodies end up differing much in appearance, the transparency of the modelling process provides an indication of why this is happening; you may then be forced to decide
which, if any, you prefer and why. If you go in for bodily creation it helps to have the
vision of a Frankenstein and, preferably, better tools that can be employed to better
effect. It is hoped that this paper has demonstrated that, for certain classes of problem
involving temporal uncertainty, the sampling model is a useful unifying tool, easy to
understand, flexible, practical, and capable of delivering results rapidly that can then
be contemplated in a more leisurely fashion.
Modelling is important; modelling is practical; modelling, if it is a consideration, can
be fun.
Acknowledgements
The thorough reading and constructive comments of two referees of the original draft
are gratefully acknowledged.
A Appendix – Supplementary Material
A-1.1 The Beta distribution
A variety of Beta distributions are illustrated in Figure A-1.
Figure A-1: Beta distributions for different values of α and β.
The Beta probability distribution with parameters α and β has the form

    f(x) = [Γ(α + β)/(Γ(α)Γ(β))] x^(α−1) (1 − x)^(β−1)

for 0 ≤ x ≤ 1, where Γ(α) is the gamma function, Γ(α) = (α − 1)! for integer α, and

    B(α, β) = Γ(α)Γ(β)/Γ(α + β)

is the beta function.
The mean of the distribution is µ = α/(α + β); the variance is σ² = µ(1 − µ)/(α + β + 1);
and the mode, for α, β > 1, is (α − 1)/(α + β − 2).
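These moments are easy to check by simulation; the Python standard library's random.betavariate samples from Beta(α, β) (a sketch independent of the paper's R code, with α and β chosen arbitrarily):

```python
import random

alpha, beta = 2.0, 5.0
mu = alpha / (alpha + beta)                  # mean: 2/7
var = mu * (1.0 - mu) / (alpha + beta + 1)   # variance: mu(1 - mu)/(alpha + beta + 1)
mode = (alpha - 1) / (alpha + beta - 2)      # mode (alpha, beta > 1): 1/5

rng = random.Random(0)
xs = [rng.betavariate(alpha, beta) for _ in range(200_000)]
m = sum(xs) / len(xs)
v = sum((x - m) ** 2 for x in xs) / len(xs)
# m and v agree with mu and var to within Monte Carlo error
```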
Using R (R Core Team, 2015) different Beta distributions can be generated, and inspected for their suitability for modelling date distributions with curve(dbeta(x,3,7)),
for example, which generates and plots the Beta distribution for α = 3 and β = 7.
The Beta distribution seems not to have been widely advertised in the quantitative
archaeological literature – it is not mentioned in introductory quantitative archaeology
texts. Instances we are aware of arise in Bayesian applications, as prior and posterior
distributions for a parameter in some model of interest (e.g., Buck et al., 1996: 101–102,
146–159; Robertson, 1999; Ortman, 2000; Orton, 2000; Cowgill, 2002; Millard, 2005;
Verhagen, 2007, Chapter 4); see, also, Baxter (2003: 179–182). The use of the Beta
distribution in this paper can be thought of as a prior distribution for modelling beliefs
about date distributions, with the more widely used uniform distribution as a special
case. It is used as a means of exploring the consequences of different assumptions about
the life-span of an artefact type, rather than for producing a posterior distribution for
such beliefs as might be attempted in a full Bayesian analysis.
A-1.2 R code
Generating and plotting dates
The function rbeta(n, shape1, shape2) can be used to generate random values from
the Beta distribution, where n is the event size, shape1 = α and shape2 = β.
Model (1) in the main text is
di = τ1i + Li × pi
where τ1i is the TPQ for the ith of n events; Li is its assumed life-span; and pi is a
probability determined, in the example to follow, by the parameters, α and β, of a Beta
distribution. Events may have a ‘size’ (e.g. the total number of artefacts identical in all
respects relevant to an analysis).
Create a data frame, data say, of two columns, the first of which contains the TPQs
and the second the sizes of the events. The following function repeatedly simulates sets
of dates and plots these as KDEs.
myplot <- function(X, Life = 100, Alpha = 1, Beta = 1, nsim = 99, BW = 10,
                   K = 1, Ylim = .010) {
  n <- nrow(X)
  D <- X[,1]                # TPQs
  size <- X[,2] * K
  L <- rep(Life, n)         # Modify if non-constant L needed
  # Function to simulate one set of dates
  date.sim <- function() {
    simD <- NULL
    for(i in seq(1, n, 1)) {
      simD <- c(simD, D[i] + L[i] * rbeta(size[i], Alpha, Beta))
    }
    simD
  }
  # Now plot for repeated (nsim) simulations
  date.sim.plot <- function() {
    for(j in seq(1, nsim, 1)) {
      simdates <- date.sim()
      lines(density(simdates, bw = BW)$x, density(simdates, bw = BW)$y)
    }
  }
  plot(NULL, xlim=c(-100, 500), ylim=c(0, Ylim), ylab="density",
       xlab="date", main="")
  date.sim.plot()
}
The arguments are X = the n × 2 data set; Life = assumed common life-span L;
Alpha, Beta = parameters of the Beta distribution; nsim = number of simulations to
run; BW = bandwidth of the KDEs; K = a multiplication factor for the size vector; Ylim
= limit of y-axis, a suitable value for which may need a little experimentation.
The function date.sim generates random dates, the total being the sum of the event
sizes; date.sim.plot generates and plots KDEs for nsim such simulated data sets. The
default is to generate plots assuming a uniform distribution with a common life-span of
100 years; the following
myplot(data, Alpha=2, Beta = 2)
would provide an analysis with α = β = 2 assuming a Beta distribution, other arguments
being set to their defaults.
If the truncated Normal distribution rather than the Beta distribution is used, confining attention to standard Normal distributions symmetrically truncated at ±a, denoted
by zT N ∼ T N (a), random probabilities between 0 and 1 can be generated using the
transformation (zT N + a)/2a. The rtruncnorm function from the truncnorm package
(Trautmann et al., 2014) can be used to generate random values. In the date.sim
function use
a <- 2
L[i] * (rtruncnorm(size[i], -a, a) + a) / (2*a)

in place of L[i] * rbeta(size[i], Alpha, Beta). The value a = 2 has been used for
illustration and could be provided by an argument in myplot.
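The same transformation can be illustrated without the truncnorm package, using rejection sampling from a standard Normal; a Python sketch in which the function name is ours:

```python
import random

def rtrunc01(n, a=2.0, seed=0):
    """Random 'probabilities' on [0, 1] from a standard Normal
    symmetrically truncated at +/-a: draw z ~ N(0, 1), reject draws
    outside [-a, a], then map via (z + a) / (2 * a)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        z = rng.gauss(0.0, 1.0)
        if -a <= z <= a:
            out.append((z + a) / (2.0 * a))
    return out

ps = rtrunc01(10_000)
# values lie in [0, 1] and centre on 0.5, because the truncated
# Normal is symmetric about zero
```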
The line L <- rep(Life, n) can be modified to accommodate other assumptions;
for example L <- runif(n, 20, 90) would generate n random life-spans sampled from
a uniform distribution between 20 and 90 years. For the ‘population’ model some experimentation with K > 1 may be needed, this determining the total number of random
dates generated.
For plots of the first derivatives the kdde function from the ks package can be used
(Tarn, 2015). Add a new function, date.sim.plot.deriv identical to date.sim.plot
except that the lines command is replaced with
fhat <- kdde(simdates, deriv.order = 1)
lines(fhat$eval.points, fhat$estimate)
followed by
library(ks)
plot(NULL, xlim=c(-100, 500), ylim=c(-.00012, .00013))
date.sim.plot.deriv()
where, as before, experimentation with the plot limits will be needed.
Aoristic analysis
Continuing with the example, the density histogram of aoristic weights in Figure 3 of
the main text can be obtained using something like
simdates <- date.sim()
Breaks <- seq(-80, 500, 20)
H <- hist(simdates, breaks = Breaks, plot = F)
plot(H, freq = F, xlab = "date", ylab = "density")
where the data have been replicated K = 1000 times for illustration. If the unscaled
aoristic weights are needed they can be obtained from H$counts divided by the value of
K.
The above example is for the uniform distribution assuming time-blocks of equal
length; for time-blocks of unequal lengths define Breaks to be a vector of the transition
points, including the start and end dates. This might best be embedded in a separate
function, written as above down to the date.sim function, since it is likely that one
would want to use a value of K that differs from other analyses.
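For unequal time-blocks the same simulation logic applies, with bisection on the transition points replacing fixed-width arithmetic; a Python sketch with invented breaks (the function name is ours):

```python
import bisect
import random

def aoristic_sim(events, breaks, K=1000, seed=0):
    """Approximate aoristic weights for arbitrary time-blocks: replicate
    the data K times, draw a uniform date per item, locate its block by
    bisection on the transition points, and divide counts by K."""
    rng = random.Random(seed)
    counts = [0] * (len(breaks) - 1)
    for tpq, life, size in events:
        for _ in range(size * K):
            d = tpq + life * rng.random()
            b = bisect.bisect_right(breaks, d) - 1
            if 0 <= b < len(counts):
                counts[b] += 1
    return [c / K for c in counts]

# Invented case: 10 items uniform on [0, 100), blocks 0-30, 30-50, 50-100,
# so the weights should approach 3, 2 and 5.
w = aoristic_sim([(0, 100, 10)], [0, 30, 50, 100], K=2000)
```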
References
Baxter, M. J. (2003) Statistics in Archaeology. London: Arnold.
Baxter, M. J., Beardah, C. C. and Wright, R. V. S. (1997) Some archaeological applications of kernel density estimates. Journal of Archaeological Science 24, 347–354.
Bevan, A. and Conolly, J. (2006) Multi-scalar approaches to settlement pattern analysis.
In Lock, G. and Molyneaux B. (eds.) Confronting Scale in Archaeology: Issues of
Theory and Practice. New York: Springer, 217–234.
Bevan, A. and Conolly, J. (2009) Modelling spatial heterogeneity and nonstationarity
in artifact-rich landscapes. Journal of Archaeological Science, 956–964.
Bevan, A., Crema, E., Li, X. and Palmisano, A. (2013) Intensities, interactions and
uncertainties: some new approaches to archaeological distributions, in Bevan, A.
and Lake, M. (eds.), Computational Approaches to Archaeological Spaces. Walnut
Creek: Left Coast Press, 27–51.
Bevan, A., Conolly, J., Hennig, C., Johnston, A., Quercia, A. Spencer, L. and Vroom, J.
(2013) Measuring chronological uncertainty in intensive survey finds. A case study
from Antikythera, Greece. Archaeometry, 55, 312–328.
Buck, C. E., Cavanagh, W. G. and Litton, C. D. (1996) Bayesian Approach to Interpreting Archaeological Data. Chichester: Wiley.
Cool, H. E. M. and Baxter, M. J. (2016). Brooches and Britannia. Britannia 47, 71–98.
Contreras, D. A. and Meadows, J. (2014) Summed radiocarbon calibrations as a population proxy: a critical evaluation using a realistic simulation approach. Journal of
Archaeological Science, 52, 591–608.
Cowgill, G. L. (2002) Getting Bayesian ideas across to a wide audience. Archeologia e
Calcolatori 13, 191–196.
Crema, E. R. (2011) Aoristic approaches and voxel models for spatial analysis. In
Jerem, E., Redő, F. and Szeverényi, V. (eds.), On the Road to Reconstructing the
Past. Proceedings of the 36th Annual Conference on Computer Applications and
Quantitative Methods in Archaeology. Budapest: Archeolingua, 179–186.
Crema, E. R. (2012) Modelling temporal uncertainty in archaeological analysis. Journal
of Archaeological Method and Theory, 19, 440–461.
Crema, E. R. (2015) Time and probabilistic reasoning in settlement analysis. In Barceló,
J. A. and Bogdanovich, I. (eds.), Mathematics and Archaeology. Boca Raton: CRC
Press, 314–334.
Crema, E. R., Bevan, A. and Lake, M. (2010) A probabilistic framework for assessing
spatio-temporal point patterns in the archaeological record. Journal of Archaeological Science, 37, 1118–1130.
Grove, M. (2011) A spatio-temporal kernel method for mapping changes in prehistoric
land use patterns. Archaeometry, 53, 1012–1030.
Johnson, I. (2004) Aoristic analysis: seeds of a new approach to mapping archaeological distributions through time. In Ausserer, K. F., Börner, W., Goriany, M. and
Karlhuber-Vöckl, L. (eds.), Enter the Past: the E-way into the Four Dimensions
of Cultural Heritage: CAA2003. Oxford: Archaeopress. BAR International Series
1227, 448–452.
Kolář, J, Macek, M., Tkáč, P. and Szabó, P. (2015) Spatio-temporal modelling as a way
to reconstruct patterns of past human activities. Archaeometry. Published online:
May 2015, DOI: 10.1111/arcm.12182
Mackreth, D. F. (2011) Brooches in Late Iron Age and Roman Britain. Oxford: Oxbow
Books.
Manning, K., Colledge, S., Crema, E., Edinborough, K., Kerig, T. and Shennan, S.
(2014) The chronology of culture: a comparative assessment of European Neolithic
dating approaches. Antiquity, 88, 1065–1080.
Markofsky, S. (2014) When survey goes East: field survey methodologies and analytical frameworks in a Central Asian context. Journal of Archaeological Method and
Theory, 21
Millard, A. R. (2005) What can Bayesian statistics do for archaeological predictive
modelling?, In P. M. van Leusen and H. Kamermans (eds.), Predictive Modelling for
Archaeological Heritage Management: A Research Agenda. Amersfoort: Rijkdienst
voor het Oudheidkundig Bodemonderzoek, 169–182.
Orton, C. (2000) A Bayesian approach to a problem of archaeological site evaluation.
In Lockyear, K., Sly, T. J. T. and Mihăilescu-Bı̂rliba, V. (eds.), CAA96: Computer
Applications and Quantitative Methods in Archaeology. BAR International Series
845, Oxford: Archaeopress, 1–7.
Ortman, S. G. (2000) Conceptual metaphor in the archaeological record: methods and
an example from the American Southwest. American Antiquity, 65, 613–645.
Palmisano, A. (2013) Zooming patterns among the scales: a statistics technique
to detect spatial patterns among settlements. In Earl, G., Sly, T., Chrysanthi,
A., Murrieta-Flores, P., Papadopoulos, C., Romanowska, I. and Wheatley, D.
(eds.), Archaeology in the Digital Era. Papers from the 40th Annual Conference on
Computer Applications and Quantitative Methods in Archaeology, Southampton,
26–29 March 2012. Amsterdam: Amsterdam University Press, 348–356.
R Core Team (2015) R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org
Ratcliffe, J. H. (2000) Aoristic analysis: the spatial interpretation of unspecific temporal events. International Journal of Geographical Information Science, 14, 669–679.
Roberts Jr., J. M., Mills, B. J., Clark, J. J., Haas Jr., W. R., Huntley, D. L. and
Trowbridge, M. A. (2012) A method for chronological apportioning of ceramic assemblages. Journal of Archaeological Science, 39, 1513–1520.
Robertson, I. G. (1999) Spatial and multivariate analysis, random sampling error, and
analytical noise: empirical Bayesian methods at Teotihuacan, Mexico. American
Antiquity, 64, 137–152.
Shennan, S., Downey, S. S., Timpson, A., Edinborough, K., Colledge, S., Kerig,
T., Manning, K. and Thomas, M. G. (2013) Regional population collapse followed
initial agriculture booms in mid-Holocene Europe. Nature Communications. DOI:
10.1038/ncomms3486
Surovell, T. A. and Brantingham, P. J. (2007). A note on the use of temporal frequency distributions in studies of prehistoric demography. Journal of Archaeological
Science, 34, 1868–1877.
Surovell, T. A., Finley, J. B., Smith, G. M., Brantingham, P. J. and Kelly, R. (2009)
Correcting temporal frequency distributions for taphonomic bias. Journal of Archaeological Science, 36, 1715–1724.
Tarn, D. (2015) ks: Kernel Smoothing. R package version 1.9.4, url = http://CRAN.R-project.org/package=ks
Timpson, A., Colledge, S., Crema, E., Edinborough, K., Kerig, T., Manning, K.,
Thomas, M.G. and Shennan, S. (2014) Reconstructing regional demographies of the
European neolithic using radiocarbon dates: a new case-study using an improved
method. Journal of Archaeological Science, 52, 549–557.
Timpson, A., Manning, K. and Shennan, S. (2015) Inferential mistakes in population
proxies: A response to Torfing’s “Neolithic population and summed probability
distribution of 14C-dates”. Journal of Archaeological Science, 63, 199–202.
Torfing, T. (2015a) Neolithic population and summed probability distribution of
14C-dates. Journal of Archaeological Science, 63, 193–198.
Torfing, T. (2015b) Layers of assumptions: A reply to Timpson, Manning, and Shennan.
Journal of Archaeological Science, 63, 203–205.
Trautmann, H., Steuer, D., Mersmann, O. and Bornkamp, B. (2014) truncnorm:
Truncated normal distribution. R package version 1.0-7, url = http://CRAN.R-project.org/package=truncnorm
Verhagen, P. (2007) Case Studies in Archaeological Predictive Modelling. Leiden: Leiden University Press.
Willet, R. (2014) Experiments with diachronic data distribution methods applied to
Eastern Sigillata A in the eastern Mediterranean. Herom 3, 39–69.
Williams, A. N. (2012) The use of summed radiocarbon probability distributions in
archaeology: a review of methods. Journal of Archaeological Science, 39, 578–589.
Woodbridge, J., Fyfe, R. M., Roberts, N., Downey, S., Edinborough, K. and Shennan,
S. (2014) The impact of the Neolithic agricultural transition in Britain: a comparison of pollen-based land-cover and archaeological 14C date-inferred population
change. Journal of Archaeological Science, 51, 216–224.